# Context Compression
## Cocom V1 128 Mistral 7b
COCOM is an efficient context compression method that compresses long contexts into a small number of context embeddings, significantly accelerating generation time for question-answering tasks.
Tags: Large Language Model, Transformers, English
Maintainer: naver
## Cocom V1 4 Mistral 7b
COCOM is an efficient context compression method that compresses long contexts into a small number of context embeddings, thereby accelerating generation time for question-answering tasks.
Tags: Large Language Model, Transformers, English
Maintainer: naver
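To make "compressing a context into a small number of context embeddings" concrete, below is a minimal, hypothetical sketch in PyTorch. It is not NAVER's actual COCOM implementation; the class name, layer choice, and sizes are illustrative assumptions. The idea shown is that a fixed set of learned query vectors attends over the embedded context tokens and yields a handful of vectors that stand in for the full context, so the decoder's input length no longer grows with the context length.

```python
# Illustrative sketch of context compression (not the official COCOM code).
# A 2,048-token context is reduced to 4 "context embeddings" that can be
# prepended to the question instead of the full token sequence.
import torch
import torch.nn as nn


class ToyContextCompressor(nn.Module):
    """Compress a sequence of token embeddings into a fixed number of vectors.

    Hypothetical recipe: learned query "slots" cross-attend over the context
    token embeddings, producing one output vector per slot.
    """

    def __init__(self, hidden_size: int = 768, num_ctx_embeddings: int = 4, num_heads: int = 8):
        super().__init__()
        # Learned slots that will hold the compressed representation.
        self.queries = nn.Parameter(torch.randn(num_ctx_embeddings, hidden_size) * 0.02)
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, context_embeds: torch.Tensor) -> torch.Tensor:
        # context_embeds: (batch, context_len, hidden_size)
        batch = context_embeds.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        compressed, _ = self.cross_attn(q, context_embeds, context_embeds)
        return compressed  # (batch, num_ctx_embeddings, hidden_size)


# Usage: the decoder would now attend over 4 + question_len positions
# instead of 2,048 + question_len, which is where the speed-up comes from.
compressor = ToyContextCompressor()
context_embeds = torch.randn(1, 2048, 768)   # stand-in for embedded context tokens
ctx_vectors = compressor(context_embeds)
print(ctx_vectors.shape)                     # torch.Size([1, 4, 768])
```

In this sketch the number of context embeddings plays the role of the knob that trades compression against fidelity; fewer embeddings mean a shorter decoder input and faster generation, at the cost of a coarser summary of the retrieved context.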